Neural operators, which emerge as implicit solution operators of hidden governing equations, have recently become popular tools for learning the responses of complex real-world physical systems. Nevertheless, the majority of neural operator applications have thus far been data-driven and neglect the intrinsic preservation of fundamental physical laws in data. In this paper, we introduce a novel integral neural operator architecture that learns physical models with fundamental conservation laws automatically guaranteed. In particular, by replacing the frame-dependent position information with its invariant counterpart in the kernel space, the proposed neural operator is by design translation- and rotation-invariant, and consequently abides by the conservation laws of linear and angular momentum. As applications, we demonstrate the expressivity and efficacy of our model in learning complex material behaviors from both synthetic and experimental datasets, and show that, by automatically satisfying these essential physical laws, the learned neural operator not only generalizes to translated and rotated datasets, but also achieves state-of-the-art accuracy and efficiency compared to baseline neural operator models.
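To make the invariance idea concrete, here is a minimal sketch (our own illustration with hypothetical names, not the paper's actual architecture) of an integral-operator layer whose kernel network consumes only the pairwise distance |x - y|, never absolute coordinates, so rigid motions of the input point cloud cannot change its output:

```python
import torch
import torch.nn as nn

class InvariantKernelLayer(nn.Module):
    """Integral-operator layer with a distance-only kernel, so the output
    is unchanged under translations and rotations of the coordinates.
    (Hypothetical illustration, not the paper's architecture.)"""

    def __init__(self, in_dim, out_dim, hidden=64):
        super().__init__()
        # Maps a scalar distance to an (out_dim x in_dim) kernel matrix.
        self.kernel = nn.Sequential(
            nn.Linear(1, hidden), nn.GELU(),
            nn.Linear(hidden, out_dim * in_dim),
        )
        self.in_dim, self.out_dim = in_dim, out_dim

    def forward(self, coords, feats):
        # coords: (n, d) node positions; feats: (n, in_dim) node features.
        n = coords.shape[0]
        dists = torch.cdist(coords, coords).unsqueeze(-1)            # (n, n, 1)
        K = self.kernel(dists).view(n, n, self.out_dim, self.in_dim)
        # Discretized integral: u(x_i) = (1/n) * sum_j K(|x_i - x_j|) v(x_j)
        return torch.einsum("ijoc,jc->io", K, feats) / n

# Rigid motions of the coordinates leave the output untouched:
layer = InvariantKernelLayer(3, 3)
x, v = torch.randn(10, 2), torch.randn(10, 3)
Q = torch.tensor([[0.0, -1.0], [1.0, 0.0]])  # 90-degree rotation
print(torch.allclose(layer(x, v), layer(x @ Q.T + 5.0, v), atol=1e-5))  # True
```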
Object movement identification is one of the most researched problems in the field of computer vision. In this task, we try to classify each pixel as foreground or background. Even though numerous traditional machine learning and deep learning methods already exist for this problem, most of them share two major issues: the need for large amounts of ground-truth data and inferior performance on unseen videos. Since every pixel of every frame has to be labeled, acquiring large amounts of data for these techniques is rather expensive. Recently, Zhao et al. [1] proposed a one-of-a-kind Arithmetic Distribution Neural Network (ADNN) for universal background subtraction, which utilizes probability information from the histograms of temporal pixels and achieves promising results. Building on this work, we developed an intelligent video surveillance system that uses the ADNN architecture for motion detection, trims the video down to the parts containing motion, and performs anomaly detection on the trimmed video.
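As a rough sketch of the kind of input representation such histogram-based background subtraction consumes (our illustration of the per-pixel temporal histogram only; the ADNN's arithmetic distribution layers themselves are described in [1]):

```python
import numpy as np

def temporal_pixel_histograms(frames, bins=256):
    """Per-pixel intensity histogram over time, normalized into a
    probability distribution.

    frames: (T, H, W) uint8 grayscale video stack.
    Returns: (H, W, bins) float32 array of per-pixel distributions.
    """
    T, H, W = frames.shape
    hist = np.zeros((H, W, bins), dtype=np.float32)
    rows, cols = np.indices((H, W))
    for t in range(T):
        hist[rows, cols, frames[t]] += 1.0  # scatter frame t's intensities
    return hist / T
```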
The machine translation mechanism translates texts automatically between different natural languages, and Neural Machine Translation (NMT) has gained attention for its rational context analysis and fluent translation accuracy. However, processing low-resource languages that lack relevant training attributes, such as supervised data, is a current challenge for Natural Language Processing (NLP). We incorporated a technique known as Active Learning with the NMT toolkit Joey NMT to reach sufficient accuracy and robust predictions for low-resource language translation. With active learning, a semi-supervised machine learning strategy, the training algorithm determines which unlabeled data would be the most beneficial for obtaining labels using selected query techniques. We implemented two model-driven acquisition functions for selecting the samples to be validated. This work uses transformer-based NMT systems for translating English to Hindi: a baseline model (BM), a fully trained model (FTM), an active learning least-confidence-based model (ALLCM), and an active learning margin-sampling-based model (ALMSM). The Bilingual Evaluation Understudy (BLEU) metric has been used to evaluate system results. The BLEU scores of the BM, FTM, ALLCM, and ALMSM systems are 16.26, 22.56, 24.54, and 24.20, respectively. The findings in this paper demonstrate that active learning techniques help the model converge early and improve the overall quality of the translation system.
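The two acquisition functions have standard formulations; a minimal sketch (assuming per-sample posterior probabilities are available from the NMT model, and glossing over how token-level scores are aggregated into sentence scores) is:

```python
import numpy as np

def least_confidence(probs):
    """1 - max posterior probability; higher means more uncertain."""
    return 1.0 - probs.max(axis=1)

def margin_sampling(probs):
    """Negative gap between the two most probable classes; a small gap
    (hence a high score) means the model is torn between candidates."""
    top2 = np.sort(probs, axis=1)[:, -2:]
    return -(top2[:, 1] - top2[:, 0])

def select_queries(probs, k, strategy=least_confidence):
    """Indices of the k most informative unlabeled samples to send
    for human labeling."""
    return np.argsort(strategy(probs))[-k:]
```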
Operations research deals with modeling and solving real-world problems as mathematical optimization problems. While solving mathematical systems is accomplished by analytical software, formulating a problem as a set of mathematical operations has typically been done manually by domain experts. However, recent machine learning models have shown promise in converting textual problem descriptions into corresponding mathematical formulations. In this paper, we present an approach that converts linear programming word problems into structured meaning representations that can be used by optimization solvers. Our approach uses named-entity-based enrichment to augment the input and achieves state-of-the-art accuracy, winning the second task of the NL4Opt competition (https://nl4opt.github.io).
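A minimal sketch of what named-entity-based input enrichment can look like (the bracket marker format and function name are hypothetical; the paper states only that named-entity information augments the input):

```python
def enrich_with_entities(problem_text, entities):
    """Wrap recognized spans of an LP word problem in entity markers
    before passing the text to the seq2seq model.

    entities: list of (span, label) pairs, e.g. [("regular cakes", "VAR")].
    """
    for span, label in entities:
        problem_text = problem_text.replace(span, f"[{label}] {span} [/{label}]")
    return problem_text
```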
Large pre-trained models, such as BERT, GPT, and Wav2Vec, have demonstrated great potential for learning representations that are transferable to a wide variety of downstream tasks. It is difficult to obtain a large quantity of supervised data due to the limited availability of resources and time. In light of this, a significant amount of research has been conducted on adapting large pre-trained models to diverse downstream tasks via fine-tuning, linear probing, or prompt tuning in low-resource settings. Normalization techniques are essential for accelerating training and improving the generalization of deep neural networks and have been successfully used in a wide variety of applications. Many normalization techniques have been proposed, but the success of normalization in low-resource downstream NLP and speech tasks is limited. One of the reasons is the inability of their rescaling parameters to capture expressiveness. We propose Kullback-Leibler (KL) Regularized normalization (KL-Norm), which makes the normalized data well behaved and helps with generalization: it reduces over-fitting, generalizes well to out-of-domain distributions, and removes irrelevant biases and features, with a negligible increase in model parameters and memory overhead. Detailed experimental evaluation on multiple low-resource NLP and speech tasks demonstrates the superior performance of KL-Norm compared to other popular normalization and regularization techniques.
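A sketch of one way such a KL-regularized normalization layer could be wired up (our reading of the abstract, assuming a standard-normal prior on the normalized features; not the paper's exact formulation):

```python
import torch
import torch.nn as nn

class KLNorm(nn.Module):
    """Normalization layer that standardizes features and exposes a KL
    penalty between the feature statistics and a standard-normal prior.
    One plausible reading of the abstract; the prior choice and the
    placement of the penalty are assumptions."""

    def __init__(self, dim, eps=1e-5):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(dim))
        self.beta = nn.Parameter(torch.zeros(dim))
        self.eps = eps
        self.kl = torch.tensor(0.0)  # refreshed on every forward pass

    def forward(self, x):
        mu = x.mean(-1, keepdim=True)
        var = x.var(-1, keepdim=True, unbiased=False)
        # Closed-form KL( N(mu, var) || N(0, 1) ), averaged over features.
        self.kl = 0.5 * (var + mu**2 - 1.0 - torch.log(var + self.eps)).mean()
        return self.gamma * (x - mu) / torch.sqrt(var + self.eps) + self.beta
```

During training, the exposed penalty would be mixed into the objective, e.g. `loss = task_loss + lam * layer.kl` for some weight `lam` (also an assumed detail).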
Graph Neural Networks (GNNs) have revolutionized molecular discovery for understanding patterns and identifying unknown features that can aid in predicting biophysical properties and protein-ligand interactions. However, current models typically rely on 2-dimensional molecular representations as input, and while the utilization of 2/3-dimensional structural data has gained deserved traction in recent years, many of these models are still limited to static graph representations. We propose a novel approach based on the transformer model, utilizing GNNs to characterize dynamic features of protein-ligand interactions. Our message-passing transformer pre-trains on molecular dynamics data derived from physics-based simulations to learn coordinate construction, and makes binding probability and affinity predictions as a downstream task. Through extensive testing against existing models, our MDA-PLI model was able to outperform molecular interaction prediction models with an RMSE of 1.2958. The geometric encodings enabled by our transformer architecture and the addition of time-series data add a new dimensionality to this form of research.
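For readers unfamiliar with the building block, a generic message-passing step over a molecular graph looks roughly like this (an illustrative sketch; MDA-PLI's actual layers are not specified in the abstract):

```python
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    """Generic message-passing step over a molecular graph: compute a
    message per edge, sum messages per receiving atom, update states."""

    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(2 * dim, dim)  # message from a (src, dst) pair
        self.upd = nn.GRUCell(dim, dim)     # node state update

    def forward(self, h, edge_index):
        # h: (n_atoms, dim) node features; edge_index: (2, n_edges) bonds.
        src, dst = edge_index
        m = torch.relu(self.msg(torch.cat([h[src], h[dst]], dim=-1)))
        agg = torch.zeros_like(h).index_add_(0, dst, m)  # sum per receiver
        return self.upd(agg, h)
```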
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
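Since the models are openly released, loading a BLOOM checkpoint takes only a few lines with the Hugging Face `transformers` library (shown here with the small 560M-parameter variant so it runs on modest hardware):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Smallest public checkpoint; substitute "bigscience/bloom" for the full
# 176B model if you have the hardware to shard it across.
name = "bigscience/bloom-560m"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

inputs = tokenizer("The capital of France is", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```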
The recent availability of large datasets in biomedicine has motivated the development of representation learning methods for a variety of healthcare applications. Despite advances in predictive performance, the clinical utility of such methods is limited when they are exposed to real-world data. Here, we develop model diagnostic measures to detect potential pitfalls during deployment without requiring access to external data. Specifically, we focus on modeling realistic data shifts in electrophysiological signals (EEG) via data transformations, and extend conventional task-based evaluations with analyses of (a) the model's latent space and (b) predictive uncertainty under these transformations. We conduct experiments on multiple EEG feature encoders and two clinically relevant downstream tasks using publicly available large-scale clinical EEG data. In this experimental setting, our results suggest that measures of latent-space integrity and model uncertainty under the proposed data shifts may help anticipate performance degradation during deployment.
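As a deliberately simplified example of the uncertainty side of this evaluation, one can compare predictive entropy on clean versus transformed EEG windows; the amplitude scaling shown is our own illustrative stand-in for the paper's transformation suite:

```python
import torch

def predictive_entropy(logits):
    """Entropy of the softmax posterior as a per-window uncertainty score."""
    p = torch.softmax(logits, dim=-1)
    return -(p * torch.log(p + 1e-12)).sum(dim=-1)

def uncertainty_under_shift(model, eeg, transform):
    """Mean predictive entropy on clean vs. transformed EEG windows.
    `transform` is any callable emulating a deployment-time data shift."""
    model.eval()
    with torch.no_grad():
        clean = predictive_entropy(model(eeg)).mean().item()
        shifted = predictive_entropy(model(transform(eeg))).mean().item()
    return clean, shifted

amplitude_scale = lambda x: 1.5 * x  # hypothetical deployment shift
```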
The large volumes of video data generated in today's smart cities have drawn attention from the perspective of their purposeful use, with surveillance cameras among the most prominent resources contributing to this mass of data, making its automated analysis a demanding task in terms of computation and accuracy. Violence detection (VD), which falls broadly within the action and activity recognition domain, is used to analyze large video data for anomalous actions caused by humans. Traditionally, the VD literature was based on manually designed features, though advances have since produced standalone deep-learning-based models for real-time VD analysis. This paper focuses on deep sequence learning approaches, along with localization strategies for the detected violence. The overview also covers the early image-processing- and machine-learning-based VD literature and the advantages it may hold, such as efficiency, over current complex models. In addition, datasets are discussed to provide an analysis of current models, explaining their pros and cons, and future directions for the VD domain are derived from an in-depth analysis of previous methods.
Deep learning (DL) models increasingly power a wide variety of applications. Unfortunately, this pervasiveness also makes them attractive targets for extraction attacks, which can steal a target DL model's architecture, parameters, and hyperparameters. Existing research on extraction attacks observes varying levels of attack success across different DL models and datasets, yet the underlying reasons for their susceptibility often remain unclear. Identifying such root-cause weaknesses would help secure DL systems, but this requires studying extraction attacks in a wide variety of scenarios to identify commonalities across attack success and DL characteristics. The sheer technical effort and time required to understand, implement, and evaluate even a single attack makes exploring the large number of unique extraction attack scenarios infeasible, and current frameworks are typically designed to work only with specific attack types, datasets, and hardware platforms. In this paper, we introduce PINCH: an efficient and automated extraction attack framework capable of deploying and evaluating multiple DL models and attacks across heterogeneous hardware platforms. We demonstrate the effectiveness of PINCH by empirically evaluating a large number of previously unexplored extraction attack scenarios, as well as secondary attack staging. Our key findings show that 1) multiple characteristics affect extraction attack success, spanning DL model architecture, dataset complexity, hardware, and attack type, and 2) partially successful extraction attacks significantly enhance the success of further adversarial attack staging.
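For context, the attack family being automated can be sketched as a query-then-distill loop (a generic illustration of model extraction, not PINCH's own implementation):

```python
import torch
import torch.nn.functional as F

def extract_model(victim, surrogate, queries, optimizer, epochs=10):
    """Minimal query-based extraction loop: label attacker-chosen inputs
    with the victim's posteriors, then train a surrogate to match them."""
    victim.eval()
    with torch.no_grad():
        targets = F.softmax(victim(queries), dim=-1)  # stolen behavior
    for _ in range(epochs):
        optimizer.zero_grad()
        log_probs = F.log_softmax(surrogate(queries), dim=-1)
        loss = F.kl_div(log_probs, targets, reduction="batchmean")
        loss.backward()
        optimizer.step()
    return surrogate
```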